Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations

Chen, Yihong, Minervini, Pasquale, Riedel, Sebastian, Stenetorp, Pontus

arXiv.org Artificial Intelligence

Learning good representations on multi-relational graphs is essential to knowledge base completion (KBC). In this paper, we propose a new self-supervised training objective for multi-relational graph representation learning, obtained by simply incorporating relation prediction into the commonly used 1vsAll objective. The new training objective contains not only terms for predicting the subject and object of a given triple, but also a term for predicting the relation type. We analyse how this new objective impacts multi-relational learning in KBC: experiments on a variety of datasets and models show that relation prediction can significantly improve entity ranking, the most widely used evaluation task for KBC, yielding a 6.1% increase in MRR and a 9.9% increase in Hits@1 on FB15k-237, as well as a 3.1% increase in MRR and a 3.4% increase in Hits@1 on Aristo-v4. Moreover, we observe that the proposed objective is especially effective on highly multi-relational datasets, i.e., datasets with a large number of predicates, and generates better representations when larger embedding sizes are used.
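The combined objective described above can be sketched as follows. This is a minimal toy illustration using a DistMult-style scoring function as a stand-in for the paper's models; the embedding sizes, the weighting coefficient `lam`, and all numbers are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim = 5, 3, 4
E = rng.normal(size=(n_ent, dim))  # entity embeddings
R = rng.normal(size=(n_rel, dim))  # relation embeddings

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def loss(s, r, o, lam=0.5):
    # 1vsAll terms: predict the object and the subject against all entities
    obj_scores = (E[s] * R[r]) @ E.T   # score(s, r, ?) for every entity
    sub_scores = E @ (R[r] * E[o])     # score(?, r, o) for every entity
    # auxiliary term: predict the relation type against all relations
    rel_scores = R @ (E[s] * E[o])     # score(s, ?, o) for every relation
    nll = -(log_softmax(obj_scores)[o] + log_softmax(sub_scores)[s])
    return nll - lam * log_softmax(rel_scores)[r]

print(loss(0, 1, 2))
```

Setting `lam=0` recovers the plain 1vsAll objective; the extra term adds the negative log-likelihood of the true relation under a softmax over all relation types.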


Knowledge Base Completion: Baseline strikes back (Again)

Jain, Prachi, Rathi, Sushant, Mausam, Chakrabarti, Soumen

arXiv.org Artificial Intelligence

Knowledge Base Completion has been a very active area recently, where multiplicative models have generally outperformed additive and other deep learning methods such as GNNs, CNNs, and path-based models. Several recent KBC papers propose architectural changes, new training methods, or even a new problem reformulation. They evaluate their methods on standard benchmark datasets: FB15k, FB15k-237, WN18, WN18RR, and Yago3-10. Recently, some papers discussed how 1-N scoring can speed up training and evaluation. In this paper, we discuss how simply applying this training regime to a basic model like ComplEx gives near-SOTA performance on all the datasets -- we call this model COMPLEX-V2. We also highlight how various multiplicative methods recently proposed in the literature benefit from this trick and become indistinguishable in terms of performance on most datasets. This paper calls for a reassessment of their individual value in light of these findings.
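As a rough illustration of the 1-N scoring trick applied to ComplEx, the sketch below scores one (subject, relation) pair against every candidate object in a single vectorised pass, rather than one triple at a time; the complex-valued embeddings here are placeholders for trained parameters:

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    # ComplEx triple score: Re(<e_s, w_r, conj(e_o)>), with complex embeddings
    return float(np.real(np.sum(e_s * w_r * np.conj(e_o))))

def score_1_to_n(e_s, w_r, E_all):
    # 1-N scoring: one (subject, relation) pair against all candidate objects
    # at once, as a single matrix-vector product
    return np.real((e_s * w_r) @ np.conj(E_all).T)
```

The 1-N version returns the same scores as calling `complex_score` per candidate, but amortises the work over all objects, which is what makes training and evaluation faster.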


Reasoning Over Paths via Knowledge Base Completion

Sudhahar, Saatviga, Roberts, Ian, Pierleoni, Andrea

arXiv.org Artificial Intelligence

Knowledge base completion (KBC) is crucial for the use of large knowledge bases in many downstream applications, and explaining the predictions given by a KBC algorithm is quite important for several real-world use cases. For example, in recommender systems, a knowledge graph of users, items, and their interactions is used to recommend an item to a user based on the user's interactions with several items. The ability to explain and reason about the decision is of critical importance for adding knowledge to recommender systems. Similarly, in a knowledge graph consisting of human biological data such as genes, drugs, symptoms, and diseases, it is crucial to know which genes and symptoms were involved in predicting a drug for a disease. This requires automatic extraction and ranking of multi-hop paths between a given source and a target entity from a knowledge graph. Previous work has focused on using path information in knowledge graphs for KBC, known as path-based inference (Lao et al., 2011; Gardner et al., 2014; Neelakantan et al., 2015; Das et al., 2017b), in which a model is trained to predict missing links between a given pair of entities, taking as input several paths that exist between them. Paths are ranked according to a scoring method and used as features to train the model. Embedding-based inference models (Bordes et al., 2013; Lin et al., 2015; Nickel et al., 2011; Socher et al., 2013; Trouillon et al., 2016) for KBC learn entity and relation embeddings by solving an optimization problem that maximises the plausibility of known facts in the knowledge graph.
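The multi-hop path extraction described above can be sketched on a toy graph. The triples, relation names, and hop limit below are illustrative only, and the paper's actual path-ranking model is not reproduced:

```python
from collections import deque

# toy knowledge graph of (head, relation, tail) triples
triples = [("drugA", "treats", "disease1"),
           ("drugA", "binds", "geneX"),
           ("geneX", "associated_with", "disease1"),
           ("disease1", "has_symptom", "fever")]

adj = {}
for h, r, t in triples:
    adj.setdefault(h, []).append((r, t))

def paths(source, target, max_hops=3):
    # enumerate all relation paths from source to target, up to max_hops edges
    out, queue = [], deque([(source, [])])
    while queue:
        node, path = queue.popleft()
        if node == target and path:
            out.append(path)
        if len(path) < max_hops:
            for r, t in adj.get(node, []):
                queue.append((t, path + [(r, t)]))
    return out

print(paths("drugA", "disease1"))
```

Here the direct one-hop path (`treats`) and the two-hop path through `geneX` would both be found; a path-based model would then score such paths and use them as features or explanations for the predicted link.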


Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference

Yoshikawa, Masashi, Mineshima, Koji, Noji, Hiroshi, Bekki, Daisuke

arXiv.org Artificial Intelligence

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data. However, there is a tradeoff between adding more knowledge data for improved RTE performance and maintaining an efficient RTE system, as such a large database is problematic in terms of memory usage and computational complexity. In this work, we show that the processing time of a state-of-the-art logic-based RTE system can be significantly reduced by replacing its search-based axiom-injection (abduction) mechanism with one based on Knowledge Base Completion (KBC). We integrate this mechanism in a Coq plugin that provides a proof automation tactic for natural language inference. Additionally, we show empirically that adding new knowledge data contributes to better RTE performance without harming the processing speed in this framework.
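A minimal sketch of the general idea, assuming a learned scoring function over predicate embeddings: instead of searching a full knowledge database, candidate axioms are scored by a KBC model and injected only when plausible. The cosine similarity, the embeddings, and the threshold below are placeholders, not the paper's actual KBC model or Coq integration:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8
# hypothetical predicate embeddings, as if learned by a KBC model
emb = {w: rng.normal(size=dim) for w in ["dog", "animal", "car", "vehicle"]}

def kbc_score(p, q):
    # cosine similarity as a stand-in for a learned KBC scoring function
    a, b = emb[p], emb[q]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def inject_axioms(candidates, threshold=0.0):
    # keep only candidate axioms (p, q) scored above the threshold,
    # rather than searching a large database for matching entries
    return [(p, q) for p, q in candidates if kbc_score(p, q) > threshold]
```

The point of the replacement is that scoring a candidate pair is a cheap embedding lookup and dot product, whereas search-based abduction must consult the full knowledge database.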